Hidden Layers: AI and the People Behind It
Anthropic Interpretability, GPT-4 Image Gen, Latent Reasoning, Synthetic Data & more | EP.39

Update: 2025-04-09
Description

In this episode of Hidden Layers, Ron Green talks with Dr. ZZ Si, Michael Wharton, and Reed Coke about recent AI developments. They cover Anthropic’s work on Claude 3.5 and model interpretability, OpenAI’s GPT-4 image generation and its underlying architecture, and a new approach to latent reasoning from the Max Planck Institute. They also discuss synthetic data in light of NVIDIA’s acquisition of Gretel AI and reflect on the delayed rollout of Apple Intelligence. The conversation explores what these advances reveal about how AI models reason, behave, and can (or can’t) be controlled.
